Mobile Edge Computing reduces latency and response times by bringing computational resources closer to end users. However, user mobility poses a significant challenge, as users continuously move between the coverage areas of different edge nodes with limited range. This dynamic environment demands efficient scheduling mechanisms that can adapt to user movement while meeting application deadlines and optimizing edge resource utilization. This paper proposes a scheduling approach based on Deep Reinforcement Learning, specifically an Advantage Actor-Critic (A2C) architecture within a Fog and Edge computing framework for IoT applications. The method enables distributed decision-making by deploying actor agents at the edge nodes and a centralized critic at the fog node, facilitating continuous adaptation through system-wide feedback. User mobility is addressed through location prediction with RNN models embedded at each edge node, enabling proactive, informed offloading decisions. Experimental results demonstrate that the proposed approach improves the task completion rate by 50%, reduces the failure rate by 26%, and lowers response latency by 60%, while adapting well to dynamic environments and outperforming state-of-the-art methods in real-world-inspired scenarios.
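To make the distributed-actor / centralized-critic layout concrete, the sketch below shows a minimal A2C setup in PyTorch: one small actor per edge node mapping a local observation to a distribution over offloading targets, and a single critic at the fog node scoring the aggregated system-wide state. This is not the authors' implementation; the class names, tensor dimensions, reward placeholder, and the toy one-step update are assumptions made purely for illustration.

```python
# Illustrative sketch only: assumed dimensions, names, and toy update loop.
import torch
import torch.nn as nn

class EdgeActor(nn.Module):
    """Actor deployed at one edge node: maps its local observation
    (queue length, predicted user location, task deadline, ...) to a
    distribution over candidate offloading targets."""
    def __init__(self, obs_dim: int, n_targets: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_targets),
        )

    def forward(self, obs: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(obs))

class FogCritic(nn.Module):
    """Centralized critic at the fog node: estimates the value of the
    system-wide state aggregated from all edge nodes."""
    def __init__(self, global_state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(global_state_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)

# Toy one-step advantage actor-critic update with made-up tensors.
obs_dim, n_targets, n_edges = 8, 4, 3
actors = [EdgeActor(obs_dim, n_targets) for _ in range(n_edges)]
critic = FogCritic(global_state_dim=obs_dim * n_edges)
opt = torch.optim.Adam(
    [p for a in actors for p in a.parameters()] + list(critic.parameters()),
    lr=1e-3,
)

local_obs = torch.randn(n_edges, obs_dim)   # one local observation per edge node
next_obs = torch.randn(n_edges, obs_dim)
rewards = torch.randn(n_edges)              # e.g. deadline bonus minus latency penalty
gamma = 0.99

dists = [actors[i](local_obs[i]) for i in range(n_edges)]
actions = [d.sample() for d in dists]
log_probs = torch.stack([d.log_prob(a) for d, a in zip(dists, actions)])

value = critic(local_obs.flatten())             # V(s) from the system-wide state
next_value = critic(next_obs.flatten()).detach()
advantage = rewards.mean() + gamma * next_value - value

actor_loss = -(log_probs * advantage.detach()).mean()
critic_loss = advantage.pow(2)

opt.zero_grad()
(actor_loss + critic_loss).backward()
opt.step()
```

In this sketch the actors only ever see their own local observations, while the critic consumes the concatenated observations of all edge nodes, which is one common way to realize the "distributed actors, centralized critic" pattern the paper describes; the actual state features, reward shaping, and RNN-based location predictor would replace the random placeholders above.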